Introduction
‘Take some more tea,’ the March Hare said to Alice, very earnestly.
‘I’ve had nothing yet,’ Alice replied in an offended tone, ‘so I can’t take more.’
‘You mean you can’t take LESS,’ said the Hatter: ‘it’s very easy to take MORE than nothing.’
- Alice’s Adventures in Wonderland, Lewis Carroll
The ability to recognize and respond appropriately to change is crucial to animals’ survival. This ability is often referred to as cognitive or behavioural flexibility (Tello-Ramos et al. 2019), and can be seen in a range of behaviours in many different species: locating food in ants (Czaczkes and Heinze 2015); spatial navigation in male guppies (Lucon-Xiccato and Bisazza 2017); parental care in poison frogs (Ringler et al. 2015); and migratory behaviour in many vertebrates (Winkler et al. 2014). One experimental protocol that has been widely used to demonstrate cognitive/behavioural flexibility is reversal learning.
Reversal learning is essentially a specific type of discrimination learning, in which animals must learn a specific, arbitrary response to each of multiple stimuli. In a reversal learning experiment an animal is first faced with a simultaneous choice between two stimuli, only one of which is paired with a reward. After a certain number of trials, when the animal has likely learned the association between the correct stimulus and the reward, the reward contingencies of the two stimuli are reversed. In a serial reversal learning procedure the reward contingencies reverse repeatedly. An animal that chooses the rewarded stimulus more frequently than the non-rewarded stimulus is judged to perform better on the task; a choice of the non-rewarded stimulus is, in the context of the task, an ‘error.’
The serial reversal learning protocol can be adapted to the behaviour and sensory physiology of many different species, thus allowing comparative research. It has been done using visual stimuli in bumblebees (Strang and Sherry 2014) and guppies (Boussard et al. 2020); visual and spatial stimuli in corvids (Bond, Kamil, and Balda 2007); spatial stimuli in rats (Boulougouris, Dalley, and Robbins 2007), great tits (Hermer et al. 2018a) and gray squirrels (Chow et al. 2015); and olfactory stimuli in rats (Kinoshita et al. 2008). Reversal learning, specifically serial reversal, has been used as an explicit comparative measure of animal ‘intelligence’ (Bitterman 1964): ‘higher’ animals like pigeons, rats and monkeys showed a progressive improvement on the task and ‘lower’ animals like turtles and fish did not. Though the idea of such a hierarchy is outdated, comparative research using reversal learning can reveal important differences in behaviour and learning that have evolved under the selection pressures faced by different species.
Learning in the reversal learning task is clearly demonstrable, and is therefore a meaningful criterion when comparing the performance of different animals. First-order learning happens when an animal learns the stimulus-reward association and changes its behaviour according to the strength of this reinforcement. Higher-order or second-order learning is the learning of rules or strategies; in serial reversal learning the same stimuli are successively paired with a reward and then not paired with it, so a strategy can be very profitable. The optimal rule in reversal learning is ‘win-stay; lose-shift,’ which in practice means one ‘error’ per reversal. After learning the task, a perfectly optimal animal will first exclusively choose the stimulus that is paired with reward. At the first choice of this stimulus that does not give a reward (the error), the animal will change its preference and exclusively choose the other stimulus, which is now paired with a reward. Progressive ‘improvement’ in this task, where an animal makes fewer and fewer errors per reversal, is an indication that the animal is learning the rule of reversal, or ‘learning to learn’ (Shettleworth 2010).
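The arithmetic of the optimal rule can be made concrete with a small sketch. The following is an illustrative simulation in Python (not part of our analysis code; all names are ours) of a perfect ‘win-stay; lose-shift’ chooser on a two-option task whose reward contingencies reverse every 50 trials:

```python
# A minimal sketch of the 'win-stay; lose-shift' rule on a serial
# reversal task with two options (0 and 1), where the rewarded option
# switches every `block_len` trials.

def wsls_errors(n_trials=300, block_len=50):
    """Count errors made by a perfect win-stay; lose-shift agent."""
    choice = 0            # start by choosing option 0
    errors = 0
    for t in range(n_trials):
        rewarded = (t // block_len) % 2   # which option pays on trial t
        if choice != rewarded:
            errors += 1                   # unrewarded visit = one 'error'
            choice = 1 - choice           # lose -> shift
        # otherwise: win -> stay
    return errors

# Six blocks of 50 trials = five reversals; the agent starts on the
# rewarded option, so it makes exactly one error per reversal.
print(wsls_errors())  # -> 5
```

Any animal approaching this one-error-per-reversal bound is plausibly using the rule rather than relearning each stimulus-reward association from scratch.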
Comparative research from the primate literature reveals a very interesting difference in these two types of learning (first- and second-order). Thirteen different species of primates were compared on a visual reversal learning task (Rumbaugh, Savage-Rumbaugh, and Washburn 1996), where they were trained to discriminate a pair of stimuli to either 67% or 84% choice for the rewarded option, and then given a single reversal of reward contingencies. If the primates’ behaviour was driven mainly by first-order learning, they should make fewer errors after the reversal when the criterion was 67% rather than 84%, and vice versa if second-order learning or rule-learning was occurring. The results showed that Prosimian species tended to perform better when trained to 67%; apes when trained to 84%; and monkeys were intermediate.
Progressive improvement on the serial reversal task as more reversals are experienced, evidence of rule-learning, has been shown in many different species: bumblebees (Strang and Sherry 2014), two different species of great tits (Hermer et al. 2018a) and three different species of corvids (Bond, Kamil, and Balda 2007). Corvids in fact show significant transfer across stimulus modality, which is strong evidence for rule-learning.
What performance on the serial reversal task says about the deeper cognitive mechanisms at work, and whether the task is a measure of cognitive or behavioural flexibility, are not completely settled questions. Cognitive flexibility cannot be directly observed; it is inferred to have occurred through changes in behaviour, or behavioural flexibility (Tait et al. 2018). However, just because behavioural flexibility has been observed does not necessarily indicate cognitive flexibility (Dhawan, Tait, and Brown 2019). The term ‘behavioural flexibility’ itself has been used widely but inconsistently, applied to many traits that have different underlying neural mechanisms or do not co-vary (Audet and Lefebvre 2017). Behavioural flexibility in animals has evolved in response to selection pressures from different foraging environments: the flexibility required to deal with seasonal changes in fruit availability is not the same kind of flexibility required to deal with capturing a prey animal intent on escaping.
There is a sense in which the foraging ecology of some nectar-feeding animals is a natural analogue to the serial reversal learning task. The Neotropical bat species Glossophaga commissarisi relies primarily on flower nectar for energy. These bats have remarkably high metabolic rates for their body mass (C. C. Voigt and Winter 1999; Christian C. Voigt, Kelm, and Visser 2006), due to the energetic demands of hovering flight (Winter and Helversen 1998; O. v. Helversen and Reyer 1984). As flowers yield only small droplets of nectar each time they are visited (Christian C. Voigt, Kelm, and Visser 2006), the bats make several hundred flower visits per night. Many plants visited by bats put out only a few flowers every night that bloom for a long time (Kunz and Fenton 2005). A certain time after a flower is emptied the nectar-levels are replenished, so bats can visit the same flower multiple times, relocating it primarily through their excellent spatial memory (Winter 2005; Toelch et al. 2008). The longer the bat waits, the more the flower refills but the higher the likelihood that a competitor could find and exploit the flower first. To make repeated, profitable visits to a flower, a bat must remember both the location of the flower and estimate the flower’s expected reward value. The serial reversal learning task requires an animal to respond to a change in the profitability of available options, and remember all potentially rewarding options: the kind of behavioural flexibility, in short, required in the typical foraging bout of a nectar-feeding bat.
We carried out a serial reversal learning task with wild G. commissarisi individuals. Over three nights the bats were given two potentially rewarding options to choose between. At the start of the night, only one of the options was rewarding and the other was not. After a certain number of visits had been made by the bats, the reward contingencies reversed without any cue: the previously rewarding option was now unrewarding and the previously unrewarding option was rewarding. This reversal occurred five times a night on every night.
Our aims with this experiment were as follows. Firstly, we wanted to test whether the bats were capable of reversal learning. We believed this to be extremely likely, as the behavioural requirements of the task are typical features of the animals’ foraging ecology. Secondly, if the bats demonstrated the ability to respond to the reversals, we wanted to explore how this was reflected in their decision-making: what changes occurred in the relative number of visits made to the rewarding and non-rewarding options? Thirdly, we wanted to see if the bats were capable of second-order learning, or ‘learning to learn.’ Could the bats learn the rule behind the change in their environment and use the optimal strategy of one error per reversal?
After the analyses described above were done and the data and results examined, we performed further analyses to explore the conclusions of our confirmatory analyses. The distinction between these confirmatory and exploratory results must be clearly noted. Firstly, we reasoned that there is a difference between the first visits of a night, before any experience of a reversal, and all the subsequent visits after at least one reversal had occurred. We statistically tested for this difference in the bats’ choice behaviour. Secondly, we examined the effect of the asymptotic level of performance (the highest stable proportion of visits to the rewarding option after a reversal) on the performance immediately after a reversal.
Methods
Study site and subjects
The experiment was done from the 28th of June to the 25th of July, 2017, at La Selva Biological Field Station, Heredia Province, Costa Rica. Male and female individuals of the species Glossophaga commissarisi were captured from the wild for the experiment. The bats were attracted to a particular location in the forest using sugar-water (see Reward below) as bait and then caught in mist-nets. The bats were sexed and the selected individuals were then taken to two flight-cages (4 x 6 m). The flight-cages had mesh walls and therefore the same climatic conditions as the surrounding environment. A group of four bats at a time was put into a flight cage; all the individuals in a group were the same sex. The bats were weighed, and radio frequency identification (RFID) tags uniquely assigned to each bat were placed around their necks as collars. The bats were then released into the flight-cages so they could fly within them freely.
Before the start of the experiment the procedure was piloted with four females and refinements were made; the data from these individuals were not analyzed. 16 bats participated in the main experiment. Two of them did not drink a sufficient amount of sugar-water to meet minimum energy requirements; they were released before the end of the experiment, were not replaced, and their data were not analyzed. Thus, 14 bats in total (seven males and seven females) completed the experiment and the data from these animals were analyzed. At the end of the experiment, the RFID collars were removed and the bats were weighed to make sure they were still at a healthy weight. No blinding was done, as all data collection was fully automated.
Animal experimental procedures were reviewed and permission for animal experimentation and RFID-tagging was granted by Sistema Nacional de Areas de Conservación (SINAC) at the Ministerio de Ambiente y Energía (MINAE), Costa Rica.
Experimental Setup
Reward
The reward received by the bats during the experiment was also their main source of food. The reward was a 17% by weight solution of sugar dissolved in water (prepared fresh every day), hereafter referred to as ‘nectar.’ The sugar consisted of a 1:1:1 mass-mixture of sucrose, fructose and dextrose. The nectar was thus similar in composition and concentration to the nectar produced by wild chiropterophilous plants (Baker, Baker, and Hodges 1998).
Flower and pump setup
Each flight cage had a square plastic frame in the center (2 x 2 x 1.5 m). Eight reward-dispensing devices - hereafter referred to as ‘flowers’ - were fixed in a radial pattern on this frame, two on each side of the square (see Figure 1), with a distance of 40 cm between adjacent flowers. This is a distance the bats can discriminate (Thiele and Winter 2005). Each flower had the following parts: an RFID reader mounted on a plastic cylinder around the head of the flower; an infra-red light-barrier beam; and an electronic pinch valve through which a PVC tube was placed and fixed to the head of the flower.
A stepper-motor pump was placed in the center of the plastic frame in each cage. Each pump contained a 25 mL Hamilton glass syringe (Sigma-Aldrich). The precision of the two pumps differed slightly: the pump in Cage 1 delivered 2.11 \(\mu\)L per step of the stepper-motor, and the pump in Cage 2, 3.33 \(\mu\)L per step. The glass syringe was connected to the tubing system of the flowers through five pinch valves, which controlled the flow of liquid from the pump to the system and from a reservoir of liquid to the pump. The reservoir (500 mL thread bottle, Roth, Germany) was filled with fresh nectar every day and connected to the syringe through the valves.
When a tagged bat approached a flower, its individual RFID number was read by the reader. If the bat then poked its nose into the flower and broke the light barrier, a reward was released: the pinch valve opened and the pump moved the pre-programmed number of steps to dispense nectar to the head of the flower, which the bat could easily lick up while hovering. A reward was triggered only when both events occurred, i.e., the RFID reader detected a bat and the light barrier was broken.
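The trigger condition is a simple conjunction. The sketch below (in Python, purely illustrative; the actual control software was PhenoSoft Control, and all names here are ours) captures the gating logic, including the assigned-flower check used in the later training stages:

```python
# Illustrative gating logic: a reward is dispensed only when an RFID tag
# has been read AND the light barrier is broken. The assigned-tags check
# reflects the training stages in which each flower rewarded only the
# bat(s) assigned to it.

def should_dispense(rfid_tag, light_barrier_broken, assigned_tags):
    """Return True if this visit should trigger the pump."""
    return (rfid_tag is not None
            and light_barrier_broken
            and rfid_tag in assigned_tags)

assert should_dispense('bat01', True, {'bat01'})
assert not should_dispense('bat01', False, {'bat01'})  # no nose-poke
assert not should_dispense('bat02', True, {'bat01'})   # unassigned bat
assert not should_dispense(None, True, {'bat01'})      # no tag read
```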
The flowers and the pump were connected to a Lenovo ThinkPad laptop computer, which ran the experimental programs and the programs used to clean and fill the system (PhenoSoft Control 16, PhenoSoft GmbH, Berlin, Germany). The raw data were recorded on this computer as comma-separated values (CSV) files.
Experimental procedure
Every day at around 1000 h the old nectar was emptied from the system. The system was rinsed and filled with plain water until 1500 h, when it was filled again with fresh nectar. Twice a week the system was filled with 70% ethanol for an hour to prevent microbial growth, then repeatedly rinsed with water.
Four bats were placed in a flight-cage in a group, and all the bats were the same sex. There were four such groups in total, and data were collected simultaneously from two groups, one in each flight-cage. Each bat was uniquely assigned two adjacent flowers on the same side of the square frame, out of the array of eight. These flowers were programmed to reward only one of the four bats in the cage. After the system was filled with fresh nectar at approximately 1700 h, the program was left running for data-collection till the next morning. Thus, the bats could begin visiting the flowers to collect a reward whenever they chose, which was at approximately 1800 h every night.
During the course of the night, when the syringe of the pump had been emptied, the pump refilled automatically; this happened only once every night. On the main experimental days this process took 4.5 minutes (SD = 0.18) for the horizontal pump and 2.43 minutes (SD = 0.04) for the vertical pump. About 1% (SD = 0.74) of all visits made by the bats over the three experimental nights happened during these pump refill events, and the bats did not receive any reward on these visits, even when they were made to the rewarding flower.
Every night the bats were also given ad-libitum supplemental food: 3.5g of hummingbird food (NektarPlus, Nekton) in 100 mL of water and 3.5g of milk powder (Nido 1+, Nestle) in 100 mL of water. They were also given a small bowl of locally-sourced bee pollen.
Experimental design
The experiment proceeded through the following stages.
Training to use a flower
On the night the naive bats were captured and placed into the flight cages, they could receive a reward from any of the flowers whenever they visited them throughout the night. To enable the bats to find the flowers, a small cotton pad soaked in dimethyl disulphide was placed on the flowers. This is a chemical attractant produced by many bat-pollinated flowers (O. von Helversen, Winkler, and Bestmann 2000). A small drop of honey was applied to the inside of the flowers to encourage the bats to place their heads inside, break the light-barrier and trigger a nectar reward. By the end of the night all the bats had found the flowers and learned to trigger rewards quickly.
Training to use two specific flowers
After the bats had learned to trigger rewards, the next stage of training involved assigning each bat uniquely to two of the eight flowers in the array. For an individual animal, only the two flowers assigned to it would be rewarding from this stage of training until the end of the experiment. Because the bats had already learned to trigger a reward at the flowers, the flowers were not provided with the cotton pad with the chemical attractant, and honey was not applied to them. This stage was similar to the previous one, except that each bat could only trigger a reward at its assigned flowers.
Alternation
To ensure that the bats were familiar with both flowers assigned to them they went through one final stage of training: forced alternation between the two assigned flowers all night long.
Main Experiment
In this serial reversal learning task the bats had to choose between a flower that gave 40 \(\mu\)L of nectar and one that gave no reward at all. The location of the rewarding flower was not cued, but through the Alternation phase of training each bat knew the locations of both flowers that were potentially rewarding to it. After a bat had made 50 visits in total to its two flowers, a reversal occurred: the previously rewarding flower became non-rewarding and vice versa. Importantly, only visits to the two flowers assigned to a bat counted towards this visit tally; visits to any of the other flowers, which were unrewarding to that particular bat, did not. Reversals occurred at regular intervals of 50 visits until the bat either stopped making visits or reached the maximum of 300 visits in a night, after which it could no longer receive a reward on that experimental night. There were thus five reversals per night. The batch of 50 visits between two consecutive reversals (when the locations of the rewarding and unrewarding flowers remained stable) was termed a ‘reversal block’; the first 50 visits of a night, before the bats had experienced any reversal that night, also counted as a block. This stage of the experiment was repeated for three nights in a row. The same flower was the first to be rewarding at the start of every night. Thus, because there were five reversals every night (six blocks of 50 visits), if a bat completed the maximum of 300 visits on a night, the last flower to be rewarding that night was non-rewarding at the start of the next night.
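The scheduling of a night can be restated compactly. The sketch below (Python, our reconstruction for illustration only, not the experiment's actual control code) expresses which of a bat's two flowers pays on a given counted visit:

```python
# Illustrative reconstruction of the nightly reward schedule: reversals
# after every 50 counted visits, six blocks (five reversals), and a cap
# of 300 visits per night. Names and structure are ours.

BLOCK = 50
MAX_VISITS = 300

def rewarding_flower(visit_number, first_flower=0):
    """Which of a bat's two flowers (0 or 1) pays on this visit.

    visit_number counts only visits to the bat's two assigned flowers,
    starting at 0. Returns None once the nightly maximum is reached.
    """
    if visit_number >= MAX_VISITS:
        return None
    block_index = visit_number // BLOCK          # 0..5
    return (first_flower + block_index) % 2      # flip every block

assert rewarding_flower(0) == 0       # first block: flower 0 pays
assert rewarding_flower(49) == 0
assert rewarding_flower(50) == 1      # first reversal
assert rewarding_flower(299) == 1     # sixth block
assert rewarding_flower(300) is None  # nightly cap reached
```

Because six blocks give an odd number of reversals, the flower rewarding at the end of a completed night differs from the one rewarding at the next night's start, exactly as described above.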
Statistical analysis
All the models were fit in a Bayesian framework using Hamiltonian Monte Carlo in the R package brms (Bürkner 2017), which is a front-end for rstan (Stan Development Team, 2020).
All the visits made by the bats during a night, up to a maximum of 300, were included in the analyses. There were three experimental nights, divided into six blocks of 50 visits each. At the end of the first five blocks a reversal occurred and the end of the last block was the end of data-collection for the night. Each block was further divided into five bins, each consisting of ten visits, in order to examine the bats’ behaviour within each block.
We defined a perseverative visit as a visit to the previously-rewarding option just after the occurrence of a reversal, until the first visit to the newly-rewarding option. By definition this could not happen in the first block of a night. A generalized linear mixed-model was used to investigate the effect of experimental night and reversal block on the number of perseverative visits. A negative-binomial likelihood function was used for this model. Experimental night, reversal block and their interaction were fixed effects and random slopes and intercepts were used to fit regression lines for each individual animal.
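For concreteness, the perseverative-visit count for one reversal can be written in a few lines of Python (an illustrative sketch with our own variable names, not the analysis code itself):

```python
# Perseverative visits after a reversal: visits to the previously-
# rewarding flower before the first visit to the newly-rewarding one.
# `visits` is a bat's choice sequence starting at the reversal; `new`
# identifies the newly-rewarding flower.

def perseverative_visits(visits, new):
    count = 0
    for flower in visits:
        if flower == new:
            break          # first visit to the newly-rewarding option
        count += 1
    return count

# e.g. a bat revisits the old flower 'A' three times before switching:
assert perseverative_visits(['A', 'A', 'A', 'B', 'B', 'A'], 'B') == 3
```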
We also examined the proportion of visits made to the rewarding flower, defined as the number of visits to the rewarding flower divided by the total number of visits in a bin. The total number of visits in a bin included only visits made to the two flowers assigned to a bat; visits to unassigned flowers were not considered in the analysis. The model was fit using a binomial likelihood function, with experimental night, block, bin and their interactions as fixed effects; random slopes and intercepts were used to fit regression lines for the individuals.
After examining the above results, further analyses were done. It is important to note that these were exploratory, and the ideas were suggested to us by the results of the intended analyses described above.
A second model was fit to the proportion of visits to the rewarding flower to take into account the fact that the first night and the first block of each night were qualitatively different from the others. On the first night the animals had had no prior experience of any reversals, and during the first block of every night they had not yet experienced a reversal on that night; this was reflected in the fit of the posterior predictions made from the first model. The second model of these data was identical to the first except for the addition of experimental night and block as two-level factor variables: the first night versus the other nights, and the first block of every night versus the other blocks. The two models were compared using leave-one-out cross-validation, implemented in brms using the package loo (Vehtari, Gelman, and Gabry 2017).
We reasoned that a comparison of the bats’ behaviour just before and after a reversal might reveal something of the learning mechanisms at work. If a higher proportion of visits to the rewarding flower just before a reversal predicts a higher proportion of visits to the rewarding flower just after it, that might indicate that the bats are learning the ‘rule’ behind the reversals. On the other hand, if there is no rule-learning and the animals’ choice is driven by how much reinforcement was received at the two options, we would expect the opposite: a higher proportion of visits to the rewarding flower before the reversal would predict a lower proportion just after it, as the animals take longer to ‘reverse’ their choices away from a highly reinforced option. We took the proportion of visits to the rewarding option, averaged over the last three bins of a reversal block for each individual, as the ‘asymptote’ of the bats’ choice behaviour. We fit a generalized linear mixed-model with the proportion of visits to the rewarding option in the first bin just after a reversal as the response variable, and asymptote (a continuous variable) and night (a factor variable) as fixed effects. Random slopes and intercepts were used to fit regression lines for each individual animal.
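The derived quantities in this exploratory model can be illustrated with a short sketch (Python, with our own illustrative names; the actual analysis was done in R):

```python
# Illustrative derivation of the exploratory variables. Each reversal
# block holds five bins of ten visits; the 'asymptote' is the proportion
# of rewarded-flower visits averaged over a block's last three bins,
# which is then paired with the proportion in the first bin of the
# following block (i.e. just after the reversal).

def bin_proportions(block_visits, rewarding, bin_size=10):
    """Proportion of visits to the rewarding flower, per bin."""
    bins = [block_visits[i:i + bin_size]
            for i in range(0, len(block_visits), bin_size)]
    return [sum(v == rewarding for v in b) / len(b) for b in bins]

def asymptote(block_visits, rewarding):
    """Mean proportion over the last three bins of a block."""
    props = bin_proportions(block_visits, rewarding)
    last3 = props[-3:]
    return sum(last3) / len(last3)

# A toy block: 50 visits, mostly to the rewarding flower 'R' by the end.
block = ['N'] * 5 + ['R'] * 45
assert bin_proportions(block, 'R')[0] == 0.5   # shaky first bin
assert asymptote(block, 'R') == 1.0            # stable by block's end
```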
Weakly informative priors were used. The random intercepts and slopes were given a Normal distribution with a mean of 0 and a standard deviation drawn from a Cauchy distribution with location 0 and scale 1. All the models were estimated using 4 chains with a thinning interval of 3, with 1200 warm-up samples and 1300 post-warm-up samples for the model that additionally treated the first experimental night and block differently; 2000 warm-up samples and 2000 post-warm-up samples for the model of the first bin of 10 visits after a reversal; and 1000 warm-up samples and 1000 post-warm-up samples for the others.
Visual inspection of the trace plots, the number of effective samples, the Gelman-Rubin convergence diagnostic (\(\hat R\)) and the calculation of posterior predictions for the same clusters were all used to assess the fit of the models. In all of the models \(\hat R\) was equal to 1 for all parameters.
The data from all 14 bats that participated in the three experimental nights were included in these models, even though some individuals did not complete all 300 visits on every single night.
All statistical analyses and creation of plots were done in R.
Data availability
All data and analysis code are available online at …..
Discussion
In our experiment, wild nectar-feeding bats participated in a serial reversal learning task; to our knowledge, this is the first time such an experiment has been done with bats. The bats detected and responded to the changing reward contingencies, evidencing first-order learning. As the animals experienced more reversals over more experimental nights, their performance improved in two significant ways.
A faster switch
After each reversal they were quicker to switch from the previously-rewarding option to the newly-rewarding option. This was demonstrated in two ways. Firstly, after a reversal, the number of perseverative visits, i.e., the visits the bats made to the previously-rewarding option before their first visit to the newly-rewarding option, decreased (as shown by the confirmatory analysis; Figures 2 and 3). Secondly, the proportion of visits to the newly-rewarding option in the first bin of ten visits after a reversal increased (as shown in the exploratory analysis; Figure 11).
The bats faced a relatively simple discrimination task: there were only two available options, so if one of them was perceived to be non-rewarding, there was only one possible alternative. The change from one option to another by the animals is less remarkable than the fact that this change happened more and more rapidly: more experience of the reversal led to better exploitation of the particular type of change the environment was undergoing.
The asymptote and immediate post-reversal performance
The confirmatory analyses showed that the proportion of visits to the rewarding option at the end of a reversal block progressively increased, and the number of perseverative errors decreased. The exploratory analyses shed additional light on the relationship between performance just before and just after a reversal: the correlation between the two was negative on the first and third nights, and close to zero on the second night (Figures 10 and 11). The correlation was most negative on the first night, suggesting that by the second night at least some learning had occurred. It must be noted, however, that on the third night the negative slope could be a statistical artefact, resulting from the fact that the range of the proportion of rewarded visits at asymptote was much smaller than on the first night (0.6 - 1 on the first night versus 0.8 - 1 on the third).
Our overall interpretation of our results is as follows: the bats learn to take the regular occurrence of reversals into account as they discriminate between potentially rewarding options, showing second-order learning; they show a high preference for the rewarding option but make occasional exploratory visits to the non-rewarding option.
How do other animal species compare with the bats?
There are several key points of similarity between the bats’ performance on the serial reversal task and that of other animals. Bumblebees showed improvement primarily through a reduction in perseverative errors on a colour reversal task (Chittka 1998). Notably, this ability to improve at the task seems to depend on a large number of trials, just as in our experiment. When the task was done with very small trial numbers, both bumblebees (Couvillon and Bitterman 1986) and honeybees (Mota and Giurfa 2010) stopped discriminating and began responding to both the rewarding and non-rewarding stimuli at chance levels.
Several bird species also showed performance on this task that was similar to the bats’. Corvids (Bond, Kamil, and Balda 2007) showed both a decrease in perseverative errors and an increase in performance as they experienced successive reversals; this improvement, however, was seen only in a colour reversal task and not in a spatial reversal task. Two species of great tits did even better on a spatial reversal task than the corvids: both trial number within a reversal block and reversal number had a positive effect on the proportion of visits to the rewarding option (Hermer et al. 2018b), and similar performance was seen in pigeons on a colour reversal task (Diekamp, Prior, and Güntürkün 1999). Among mammals, a decrease in perseverative errors was seen both in marmosets on a visual reversal task (Clarke, Robbins, and Roberts 2008) and in rats on a spatial reversal task (Castañé, Theobald, and Robbins 2010).
The bats’ improvement on the serial reversal learning task thus seems to follow a similar pattern to the improvement of several other animal species, potentially indicating similar learning mechanisms. The role of the sensory modality of the experimental stimuli must not be overlooked: we suggest that animals are likely to perform best on a version of the task that uses a sensory modality relevant to their natural foraging ecology, and this is consistent with the results of serial reversal experiments with multiple species. Indeed, the transfer of improved performance across stimuli (as seen in the corvids) is extremely strong evidence of rule-learning, and a potential follow-up experiment to the one we have carried out.
Reversal learning in the wild
Under natural conditions bats exploit the flowers of many different species of plants that vary in flowering season, flowering duration, the number of flowers that bloom per night, and the quantity of nectar they provide. Observations of foraging behaviour at Agave desmettiana flowers show that the visitation rates of bats depend heavily on nectar volume (Lemke 1984). Feeding rates were high in the first four hours of the night after sunset, and declined sharply when nectar volumes approached 50% of what they were at the start of the night. However, when the flowers were artificially replenished so that they never became depleted, bats visited them at a significantly higher frequency, and for 3-5 hours longer than flowers that were depleted normally. The opposite pattern was seen when flowers were prematurely depleted: bats stopped feeding at these flowers earlier than at control flowers, and visited neighbouring flowers with a higher frequency. This is a sensible strategy, given that Agave flowers replenish their nectar levels at a rate of only 30% in 20 hours (equally over day and night). When rates of nectar replenishment are higher, bats are capable of detecting this and returning to the same flowers sooner (Tölch 2006), utilizing their excellent spatial memory to find the flower again and adjusting the time interval between successive visits based on the secretion rate of the experimental flowers. In the wild, therefore, flowers alternate between being rewarding and non-rewarding. It would seem that a ‘win-stay; lose-shift’ strategy is ideal for the bats’ natural foraging ecology, just as it is optimal in a serial reversal task. But the natural environment is not a perfect analogue of the experimental task. Flower nectar levels in nature are more likely to decline at a perceptible rate, rather than suddenly drop to zero (Lemke 1984), and the perception of flower nectar volume is subject to Weber’s Law (Toelch and Winter 2007).
Given these factors, a foraging bat would need to make more than the single visit required by the optimal ‘win-stay; lose-shift’ strategy to perceive that nectar levels have been depleted so far that future visits will not be profitable. A more complex behavioural strategy might be more profitable in this more complex environment.
Learning mechanisms in a reversal learning task
Previous work has shown that the bats’ behaviour can be described well by a choice-history-dependent behavioural model (Nachev et al. 2017). In such a model, an animal has an estimate of the profitability of each available option, based on its memory of past rewards at these options. The past experiences are weighted by recency: the most recent experience affects the estimate of an option more than earlier ones, and the estimates are updated according to the animal’s learning rate. Our results, from both the confirmatory and the exploratory analyses, seem to indicate that reinforcement learning, which is choice-history dependent (Worthy and Maddox 2014), plays a role in the animals’ behaviour, especially at the beginning of the experiment before second-order learning has happened. As they learn the task, the bats appear to approach the strategy of ‘win-stay; lose-shift,’ wherein decision-making depends only on the most recent experience and not on the previous choice-history.
The key word here is approach. We suggest that through second-order learning, the bats shift from a reinforcement-learning strategy to a strategy that approximates ‘win-stay; lose-shift.’ A pure reinforcement-learning strategy might result in high performance, but not in decreased perseveration or in improved performance immediately after a reversal: the options simply reverse between the same two rewarding states, so updating the estimates alone should produce consistent, non-improving performance. A pure ‘win-stay; lose-shift’ strategy would result in optimum performance. The bats never show such optimal behaviour, but their behaviour becomes a closer approximation of ‘win-stay; lose-shift’ as the experiment progresses, and their performance between reversals consequently improves.
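The contrast drawn above can be illustrated with a toy simulation (Python; this is our illustration, not a model fitted to the bats' data, and the learning rate is arbitrary). A greedy chooser over recency-weighted value estimates keeps perseverating after each reversal, unlike the one-error-per-reversal bound of ‘win-stay; lose-shift’:

```python
# Toy recency-weighted reinforcement-learning chooser on the same task:
# two options, a reversal every 50 trials, 300 trials per 'night'.

def rl_errors(n_trials=300, block_len=50, alpha=0.2):
    """Errors made by a greedy chooser over exponentially updated estimates."""
    values = [0.5, 0.5]          # initial estimates of the two options
    errors = 0
    for t in range(n_trials):
        rewarded = (t // block_len) % 2      # which option pays now
        choice = 0 if values[0] >= values[1] else 1
        reward = 1.0 if choice == rewarded else 0.0
        if choice != rewarded:
            errors += 1
        # recency-weighted update: recent outcomes count more
        values[choice] += alpha * (reward - values[choice])
    return errors

# A perfect win-stay; lose-shift chooser would make exactly five errors
# per night (one per reversal); this chooser makes several per reversal,
# because its estimate of the old option must decay below the other
# option's estimate before it switches.
print(rl_errors())
```

Note that in this toy model the error count per reversal does not shrink across reversals, since each reversal costs the same number of decay steps. Improvement across reversals, as the bats show, therefore points beyond pure value-updating.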
Our results thus show that nectar-feeding bats are not only capable of higher-order learning, but of flexibly applying different behavioural strategies in response to a predictably changing environment.